Notebook to visualize SWOT longitudinal profile data, and modify vertical datum and units¶

The SWOT satellite, launched in December 2022, measures water surface elevation, width, and slope on nearly all global rivers and lakes. Higher-latitude locations such as Alaska are typically observed 3 or 4 times per 21-day orbit cycle.

[SWOT satellite illustration]

Key documents: https://podaac.jpl.nasa.gov/SWOT

The SWOT river data are organized by unique reach IDs (`reachid`).

The workflow in the notebook contains Python cells that do the following:

  1. Set up the compute environment by importing software packages
  2. Identify the reachid you are interested in via the "SWORD Explorer" website
  3. Enter the period of interest for pulling long profiles
  4. Show the long profile and manipulate its units and vertical datum

To do

  • Map nodes
  • Add datum conversion with the vdatum API

1 Set up environment¶

The Python cells below need to be run each time the notebook is executed. They set up the libraries needed to run here in CUAHSI's JupyterHub cloud.

In [1]:
# we'll use the plotly library to show the data; other libraries are in the Utilities.py file
import plotly.express as px
In [2]:
# these two functions pull reach timeseries and long profiles, respectively
from Utilities import PullReachTimeseries, PullLongitudinalProfile

2 Find reach of interest¶

To choose a reach to analyze, go to SWORD Explorer: https://www.swordexplorer.com

To view Alaska, first click on the "81" basin. You should then see a map like the one below. Zoom in, click on the reach you are interested in, and its reachid will pop up.

[SWORD Explorer map of basin 81]

In [3]:
# define reach
reachid='81246000021' # this is the Nenana River at Nenana
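As a side note, SWORD reach IDs carry structure. A minimal sketch of decoding one, assuming the 11-digit layout described in the SWORD product documentation (6-digit level-6 basin code, 4-digit reach number, 1-digit reach type); verify against the current SWORD docs before relying on it:

```python
# Sketch: decode the pieces of a SWORD reach ID, assuming the documented
# 11-digit layout (basin code + reach number + type digit).
reachid = '81246000021'

basin = reachid[:6]        # level-6 basin code; the leading '81' matches
                           # the basin you click in SWORD Explorer
reach_num = reachid[6:10]  # reach number within the basin
reach_type = reachid[10]   # '1' marks a river reach in SWORD

print(basin, reach_num, reach_type)  # -> 812460 0002 1
```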

3 Define time period of interest¶

This will pull and show a timeseries of SWOT overpasses for a reach, by displaying a timeseries of water elevations.

In [4]:
df=PullReachTimeseries(reachid)
waiting for SWOT data to download...
... done. Successfully pulled SWOT data and put in dictionary
In [5]:
# Extract representative x (longitude) and y (latitude) to use for vdatum api to convert vertical datum
x = df.p_lon.unique().item()
y = df.p_lat.unique().item()
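Note that `.unique().item()` raises a `ValueError` if the column holds more than one distinct value. A more defensive sketch (column names `p_lon`/`p_lat` as in the cell above) falls back to the first value with a warning:

```python
import pandas as pd

# Defensive variant of the extraction above: warn and use the first
# coordinate if the reach unexpectedly carries more than one lon/lat.
def representative_coords(df, lon_col='p_lon', lat_col='p_lat'):
    lon = df[lon_col].unique()
    lat = df[lat_col].unique()
    if len(lon) > 1 or len(lat) > 1:
        print('warning: multiple coordinates found; using the first')
    return float(lon[0]), float(lat[0])

# toy example with made-up coordinates
demo = pd.DataFrame({'p_lon': [-149.1, -149.1], 'p_lat': [64.56, 64.56]})
x, y = representative_coords(demo)
```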
In [6]:
# TODO : Currently this is just a placeholder for api for testing purpose. DO NOT USE IN PRODUCTION ENVIRONMENT
# https://vdatum.noaa.gov/docs/services.html
# Query vdatum api to calculate offset between two vertical datums
import requests
import json
vdatum_url = f"https://vdatum.noaa.gov/vdatumweb/api/convert?region=ak&s_x={x}&s_y={y}&s_v_geoid=egm2008&t_v_frame=NAVD88&t_v_geoid=geoid12b"
res = requests.get(vdatum_url)
# load data into a dictionary
data=json.loads(res.text)
offset = float(data["t_z"])  # this is the offset between two datums. Add/subtract
offset
Out[6]:
1.741
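Since the cell above is flagged as a placeholder, one way to make it easier to audit is to factor the URL construction into a helper. This is only a sketch; the parameter names and values mirror the cell above and should be verified against the vdatum services documentation before any production use:

```python
# Sketch: build the vdatum query URL separately so it can be inspected
# before sending. Parameters mirror the placeholder cell above; verify
# them against https://vdatum.noaa.gov/docs/services.html
def build_vdatum_url(x, y, region='ak',
                     s_v_geoid='egm2008', t_v_frame='NAVD88',
                     t_v_geoid='geoid12b'):
    return (f"https://vdatum.noaa.gov/vdatumweb/api/convert?"
            f"region={region}&s_x={x}&s_y={y}"
            f"&s_v_geoid={s_v_geoid}&t_v_frame={t_v_frame}"
            f"&t_v_geoid={t_v_geoid}")

url = build_vdatum_url(-149.09, 64.56)
```

When sending the request, it is also worth calling `res.raise_for_status()` and checking that `"t_z"` is present in the response before converting it to a float.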
In [7]:
# Apply the datum conversion offset
df["wse"] = df["wse"] + offset  # double check if this should be added or subtracted
In [8]:
# plot swot data as a timeseries
px.line(df,x='time_str',y='wse',
       labels={"time_str": "",
               "wse": "wse[m]"},
        markers=True)

This data has been filtered to include quality flags 0 and 1 (good and suspect).

You should notice that some observations look a bit more suspect than others.

A more sophisticated filter might remove datapoints such as the one on September 16; it is shown here as a reminder to apply sanity checks at all times. Note also that I have not removed ice-flagged data.

There was a processing update in October 2024, and data since then has looked a bit better.
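The quality filter described above (keep flags 0 and 1, drop the rest) can be sketched on a toy table. The flag column name `reach_q` is an assumption here; check the field names in the actual SWOT product:

```python
import pandas as pd

# Sketch of the quality filter described in the text: keep flags
# 0 (good) and 1 (suspect). The column name 'reach_q' is assumed.
toy = pd.DataFrame({
    'wse':     [112.3, 112.1, 95.0, 112.4],  # made-up elevations [m]
    'reach_q': [0, 1, 3, 0],                 # 3 = bad, gets dropped
})
filtered = toy[toy['reach_q'].isin([0, 1])]
```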

Look at the plot above and choose a day with data for which to analyze a long profile.

From the plot above, I am interested in the profile for August 4. It does not look abnormal, and it falls within an ice-free period.

In [9]:
# save the day as yyyy-mm-dd. 
tlong='2024-08-04'

The notebook cells below will pull a long profile for this day.

4 Pull longitudinal profile¶

In [10]:
longdf=PullLongitudinalProfile(reachid,tlong)
pulling SWOT node data. this takes about 60 seconds...
... done!
In [11]:
# Apply the datum conversion offset
longdf["wse"] = longdf["wse"] + offset  # double check if this should be added or subtracted
In [12]:
px.line(longdf,x='p_dist_out',y='wse',
       labels={"p_dist_out": "Distance to outlet [m]",
               "wse": "wse[m]"},
        markers=True)

The distance-to-outlet values are not easy to interpret.

Let's plot distance as kilometers upstream of the Tanana confluence instead.

In [13]:
longdf['dist_up_conf']=(longdf['p_dist_out']-longdf['p_dist_out'].min())/1000.
In [14]:
px.line(longdf,x='dist_up_conf',y='wse',
       labels={"dist_up_conf": "Distance to Tanana confluence [km]",
               "wse": "wse[m]"},
        markers=True)
In [15]:
# convert to English units
longdf['wse [ft]']=longdf['wse']/0.3048                       # 1 ft = 0.3048 m exactly
longdf['dist_up_conf [mi]']=longdf['dist_up_conf']*0.621371   # 1 km = 0.621371 mi
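For reference, the foot and the statute mile are defined exactly in metric terms, so the conversions above can be checked against exact constants:

```python
# The foot and mile have exact metric definitions, so the conversion
# factors need no rounding:
M_PER_FT = 0.3048      # 1 ft = 0.3048 m, exact by definition
KM_PER_MI = 1.609344   # 1 mi = 1.609344 km, exact by definition

wse_ft = 100.0 / M_PER_FT    # metres -> feet
dist_mi = 10.0 / KM_PER_MI   # kilometres -> miles
```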
In [16]:
px.line(longdf,x='dist_up_conf [mi]',y='wse [ft]',
       labels={"dist_up_conf [mi]": "Distance to Tanana confluence [mi]",
               "wse [ft]": "wse [ft]"},
        markers=True)